Cost-Effective HITs for Relative Similarity Comparisons
Similarity comparisons of the form "Is object a more similar to b than to c?"
are useful for computer vision and machine learning applications.
Unfortunately, for n objects there are O(n^3) possible triplets,
making collecting every triplet an expensive task. Recognizing this difficulty,
other researchers have investigated more intelligent triplet sampling
techniques, but they do not study their effectiveness or their potential
drawbacks. Although it is important to reduce the number of collected triplets,
it is also important to understand how best to display a triplet collection
task to a user. In this work we explore an alternative display for collecting
triplets and analyze the monetary cost and speed of the display. We propose
best practices for creating cost-effective human intelligence tasks (HITs) for
collecting triplets. We show that rather than changing the sampling algorithm,
simple changes to the crowdsourcing UI can lead to much higher quality
embeddings. We also provide a dataset as well as the labels collected from
crowd workers.
Comment: 7 pages, 7 figures
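As a rough illustration of how an embedding can be recovered from such comparisons, the sketch below (not the authors' code; all names and parameters are hypothetical) fits a 2-D embedding by gradient descent on a hinge-style loss over triplets (i, j, k), each meaning "i is more similar to j than to k":

```python
import random

def triplet_embed(n, triplets, dim=2, lr=0.1, margin=1.0, epochs=200, seed=0):
    """Illustrative sketch: embed n objects so that for each triplet
    (i, j, k), object i ends up closer to j than to k."""
    rng = random.Random(seed)
    X = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n)]

    def sqdist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    for _ in range(epochs):
        for i, j, k in triplets:
            # Hinge constraint: dist(i, j) + margin < dist(i, k).
            if sqdist(X[i], X[j]) + margin > sqdist(X[i], X[k]):
                for d in range(dim):
                    # Gradient step on the violated hinge term:
                    # pull i and j together, push k away from i.
                    X[i][d] -= lr * (2 * (X[i][d] - X[j][d]) - 2 * (X[i][d] - X[k][d]))
                    X[j][d] += lr * 2 * (X[i][d] - X[j][d])
                    X[k][d] -= lr * 2 * (X[i][d] - X[k][d])
    return X
```

After training on triplets such as `[(0, 1, 2), (1, 0, 2)]`, points 0 and 1 end up closer to each other than to point 2. Crowdsourced triplet answers are noisy, which is one reason the display used to collect them matters.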
Detecting the Starting Frame of Actions in Video
In this work, we address the problem of precisely localizing key frames of an
action, for example, the precise time that a pitcher releases a baseball, or
the precise time that a crowd begins to applaud. Key frame localization is a
largely overlooked yet important action-recognition problem, with applications,
for example, in neuroscience, where we would like to understand the neural
activity that produces the start of a bout of an action. To address this problem, we
introduce a novel structured loss function that properly weights the types of
errors that matter in such applications: it more heavily penalizes extra and
missed action start detections over small misalignments. Our structured loss is
based on the best matching between predicted and labeled action starts. We
train recurrent neural networks (RNNs) to minimize differentiable
approximations of this loss. To evaluate these methods, we introduce the Mouse
Reach Dataset, a large, annotated video dataset of mice performing a sequence
of actions. The dataset was collected and labeled by experts for the purpose of
neuroscience research. On this dataset, we demonstrate that our method
outperforms related approaches and baseline methods that use an unstructured loss.
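The weighting the abstract describes can be sketched as follows. This is a hypothetical illustration, not the paper's exact loss: predicted start frames are greedily matched to labeled starts within a tolerance window, matched pairs incur a small alignment cost, and unmatched (extra or missed) starts incur a large fixed penalty. The tolerance and penalty values are assumptions.

```python
def start_detection_loss(pred, labeled, tol=10, fp_cost=100.0, fn_cost=100.0):
    """Illustrative matching-based loss over action-start detections:
    small misalignments are cheap, extra and missed starts are expensive."""
    used = [False] * len(labeled)
    loss = 0.0
    for p in sorted(pred):
        # Match p to the nearest unmatched labeled start within tolerance.
        best, best_d = None, None
        for i, t in enumerate(labeled):
            if not used[i] and abs(p - t) <= tol:
                if best_d is None or abs(p - t) < best_d:
                    best, best_d = i, abs(p - t)
        if best is None:
            loss += fp_cost        # extra detection: heavy penalty
        else:
            used[best] = True
            loss += best_d         # small misalignment: light penalty
    loss += fn_cost * used.count(False)  # missed starts: heavy penalty
    return loss
```

For instance, predicting frame 100 against a labeled start at frame 102 costs only 2, while a spurious or missed detection costs 100, so a model trained against such a loss prefers slightly misaligned detections over extra or missed ones. The paper trains RNNs on differentiable approximations; a hard matching like this one is not itself differentiable.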